Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians

J Med Internet Res. 2020 Jun 19;22(6):e15154. doi: 10.2196/15154.

Abstract

Artificial intelligence (AI) can transform health care practices with its increasing ability to translate the uncertainty and complexity in data into actionable, though imperfect, clinical decisions or suggestions. In the evolving relationship between humans and AI, trust is the one mechanism that shapes clinicians' use and adoption of AI. Trust is a psychological mechanism for dealing with the uncertainty between what is known and unknown. Several research studies have highlighted the need to improve AI-based systems and enhance their capabilities to help clinicians. However, assessing the magnitude and impact of human trust in AI technology demands substantial attention. Will a clinician trust an AI-based system? What are the factors that influence human trust in AI? Can trust in AI be optimized to improve decision-making processes? In this paper, we focus on clinicians as the primary users of AI systems in health care and present the factors shaping trust between clinicians and AI. We highlight critical challenges related to trust that should be considered during the development of any AI system for clinical use.

Keywords: FDA policy; bias; health care; human-AI collaboration; technology adoption; trust.

MeSH terms

  • Artificial Intelligence / standards*
  • Delivery of Health Care / ethics*
  • Humans
  • Trust / psychology*